Dynamic Precision Analog Computing for Neural Networks

Authors

Abstract

Analog electronic and optical computing exhibit tremendous advantages over digital for accelerating deep learning when operations are executed at low precision. Although digital architectures support programmable precision to increase efficiency, analog computing today supports only a single, static precision. In this work, we characterize the relationship between the effective number of bits (ENOB) of analog processors, which is limited by noise, and the bit precision of quantized neural networks. We propose extending analog computing architectures to support dynamic levels of precision by repeating operations and averaging the result, decreasing the impact of noise. To utilize dynamic precision, we propose a method for selecting the precision of each layer of a pre-trained model without retraining the network weights. We evaluate this method on analog processors subject to shot, thermal, and weight noise, and find that employing dynamic precision reduces energy consumption by up to 89% for computer vision models such as Resnet50 and by up to 24% for natural language processing models such as BERT. As one example, we apply dynamic precision to a shot-noise-limited homodyne optical neural network and simulate inference at 2.7 aJ/MAC for Resnet50 and 1.6 aJ/MAC for BERT with <2% accuracy degradation, implying that optical energy consumption is unlikely to be the dominant cost.
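
The repetition-and-averaging idea is easy to see in simulation. Below is a minimal sketch (illustrative Python, not the authors' implementation; the additive Gaussian read-noise model and names such as noisy_mac are assumptions): averaging N executions of the same analog multiply-accumulate shrinks the noise standard deviation by a factor of sqrt(N), i.e. roughly half an effective bit of ENOB per doubling of repetitions.

```python
import numpy as np

def noisy_mac(x, w, noise_std, rng):
    """One analog multiply-accumulate: the true dot product plus additive
    Gaussian read noise (a stand-in for shot/thermal/weight noise)."""
    return np.dot(x, w) + rng.normal(0.0, noise_std)

def averaged_mac(x, w, noise_std, repeats, rng):
    """Repeat the same MAC and average: the noise std shrinks by
    sqrt(repeats), i.e. ~0.5 effective bits per doubling of repetitions."""
    return np.mean([noisy_mac(x, w, noise_std, rng) for _ in range(repeats)])

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=256)
w = rng.uniform(-1.0, 1.0, size=256)
exact = np.dot(x, w)

for repeats in (1, 4, 16):
    errors = [averaged_mac(x, w, 0.05, repeats, rng) - exact
              for _ in range(2000)]
    print(f"repeats={repeats:2d}  noise std={np.std(errors):.4f}  "
          f"(theory {0.05 / np.sqrt(repeats):.4f})")
```

Because halving the noise costs a 4x increase in repetitions (and hence energy), assigning each layer only as many repetitions as its quantized precision actually requires is what yields the energy savings reported above.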


Similar Resources

Analog Neural Networks of Limited Precision I: Computing with Multilinear Threshold Functions

Experimental evidence has shown analog neural networks to be extremely fault-tolerant; in particular, their performance does not appear to be significantly impaired when precision is limited. Analog neurons with limited precision essentially compute k-ary weighted multilinear threshold functions, which divide R^n into k regions with k-1 hyperplanes. The behaviour of k-ary neural networks is investigated...
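
To make the k-ary threshold definition concrete, here is a minimal sketch (illustrative Python; the name kary_threshold and the example weights are assumptions, not the paper's notation): the k-1 sorted thresholds define the hyperplanes w.x = t_i, and the output is the index of the region containing the input.

```python
import numpy as np

def kary_threshold(x, w, thresholds):
    """k-ary threshold neuron: k-1 sorted thresholds t_1 < ... < t_{k-1}
    define the hyperplanes w.x = t_i, splitting R^n into k regions.
    The output is the index (0..k-1) of the region containing x."""
    return int(np.searchsorted(thresholds, np.dot(w, x)))

w = np.array([1.0, -2.0, 0.5])
thresholds = np.array([-1.0, 0.0, 1.0])  # k-1 = 3 thresholds -> k = 4 levels
print(kary_threshold(np.array([0.2, 0.1, 0.4]), w, thresholds))  # -> 2
```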


Computing with dynamic attractors in neural networks.

In this paper we report on some new architectures for neural computation, motivated in part by biological considerations. One of our goals is to demonstrate that it is just as easy for a neural net to compute with arbitrary attractors--oscillatory or chaotic--as with the more usual asymptotically stable fixed points. The advantages (if any) of such architectures are currently being investigated...


Analog versus discrete neural networks

We show that neural networks with three-times continuously differentiable activation functions are capable of computing a certain family of n-bit boolean functions with two gates, whereas networks composed of binary threshold functions require at least Ω(log n) gates. Thus, for a large class of activation functions, analog neural networks can be more powerful than discrete neural networks, ...


Analog Neural Networks as Decoders

Analog neural networks with feedback can be used to implement k-Winner-Take-All (KWTA) networks. In turn, KWTA networks can be used as decoders of a class of nonlinear error-correcting codes. By interconnecting such KWTA networks, we can construct decoders capable of decoding more powerful codes. We consider several families of interconnected KWTA networks, analyze their performance in terms of...
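
The KWTA primitive itself is easy to state in code. The sketch below (a behavioral stand-in in Python, not the paper's analog feedback circuit) marks the k largest inputs as winners, which is the selection step such decoders rely on.

```python
import numpy as np

def kwta(v, k):
    """k-Winner-Take-All: the k largest entries win (output 1),
    all other entries lose (output 0)."""
    out = np.zeros(len(v), dtype=int)
    out[np.argsort(v)[-k:]] = 1
    return out

# Toy decoding step: keep the k positions with the strongest support.
received = np.array([0.9, 0.1, 0.8, 0.4, 0.7])
print(kwta(received, k=3))  # -> [1 0 1 0 1]
```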


Analog Computation Via Neural Networks

We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of “neurons”. If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilit...



Journal

Journal title: IEEE Journal of Selected Topics in Quantum Electronics

Year: 2023

ISSN: 1558-4542, 1077-260X

DOI: https://doi.org/10.1109/jstqe.2022.3218019